Should you believe that this coin is fair?

Author

  • William Bialek
Abstract

Faced with a sequence of N binary events, such as coin flips (or Ising spins), it is natural to ask whether these events reflect some underlying dynamic signals or are just random. Plausible models for the dynamics of hidden biases lead to surprisingly high probabilities of misidentifying random sequences as biased. In particular, this probability decays as N^{-1/4}, so that no reasonable amount of data would be sufficient to induce the concept of a fair coin with high probability. I suggest that these theoretical results may be relevant to understanding experiments on the apparent misperception of random sequences by human observers.

There is a large literature testifying to the errors that humans make in reasoning about probability [1, 2]. Perhaps most fundamental is the claim that people routinely detect order and hidden causes in genuinely random sequences [3]. These apparent limitations on human rationality have broad implications, not least for economics, and have attracted considerable attention in the popular press. In contrast with these results, a number of experiments indicate that humans and other animals can change their behavior in response to changes in the probabilities of stimuli and rewards, sometimes making optimal use of the available data [4, 5, 6, 7]. Similarly, many perceptual discriminations approach the limits to reliability set by noise near the sensory input [8, 9, 10], and related ideas of statistical optimization have emerged in recent work on motor control [11, 12, 13]. There is even the suggestion that if the detection of order vs. randomness is cast in the standard two–alternative format for perceptual discrimination experiments, then people can learn to perform with close to the statistically maximum reliability [14]. While nearly perfect neural processing of statistical data under some conditions could coexist with qualitative failures at similar problems under different conditions, it would be attractive to have a more unified view of the brain as an engine for probabilistic inference.

The problem of identifying genuinely random sequences has a number of subtleties that seem not to have been emphasized in the previous literature. In particular, from a Bayesian point of view our confidence that a given sequence really was generated at random depends entirely on the universe of alternative models that we are willing to consider. Here I consider a family of models that involve time–dependent biases of unknown magnitude, similar in spirit to the models that would allow …
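The kind of model comparison at stake can be made concrete with a calculation much simpler than the time-dependent-bias family analyzed in the paper. The sketch below is an illustrative assumption of mine, not the paper's model: it compares a fair coin against a single unknown but constant bias θ with a uniform prior, using the exact Beta-function evidence. Even for genuinely fair data, the log odds in favor of fairness grow only slowly with N, which echoes (in a weaker form) the paper's point that inducing the concept of a fair coin takes a great deal of data.

```python
# Minimal sketch (assumed static-bias alternative, not the paper's dynamic model):
# Bayesian comparison of "fair coin" vs. "coin with unknown constant bias
# theta ~ Uniform(0, 1)" for a sequence of N flips.

import math
import random

def log_odds_fair_vs_biased(n_heads: int, n_flips: int) -> float:
    """log [ P(data | fair) / P(data | unknown constant bias) ].

    P(data | fair)  = (1/2)^N
    P(data | bias)  = \int_0^1 theta^h (1-theta)^(N-h) d(theta) = B(h+1, N-h+1)
    """
    log_fair = -n_flips * math.log(2.0)
    log_biased = (math.lgamma(n_heads + 1) + math.lgamma(n_flips - n_heads + 1)
                  - math.lgamma(n_flips + 2))
    return log_fair - log_biased

if __name__ == "__main__":
    rng = random.Random(0)
    for n_flips in (100, 1000, 10000):
        n_heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        # For truly fair data the odds favor "fair", but only by ~(1/2) log N.
        print(n_flips, n_heads, round(log_odds_fair_vs_biased(n_heads, n_flips), 2))
```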


Similar articles

Introduction: What Is Statistical Learning Theory?

Let us start things off with a simple illustrative example. Suppose someone hands you a coin that has an unknown probability θ of coming up heads. You wish to determine this probability (coin bias) as accurately as possible by means of experimentation. Experimentation in this case amounts to repeatedly tossing the coin (this assumes, of course, that the bias of the coin on subsequent tosses doe...
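As a minimal illustration of the estimation problem described in this blurb, one can simulate repeated tosses and report the maximum-likelihood estimate of θ. The true bias of 0.6 and the normal-approximation interval below are assumptions made for the example, not anything taken from the cited text.

```python
# Sketch: estimate an unknown coin bias theta by repeated tossing.
# true_theta = 0.6 is an arbitrary illustrative choice.

import math
import random

def estimate_bias(n_tosses: int, true_theta: float = 0.6, seed: int = 1):
    rng = random.Random(seed)
    heads = sum(rng.random() < true_theta for _ in range(n_tosses))
    theta_hat = heads / n_tosses                          # maximum-likelihood estimate
    stderr = math.sqrt(theta_hat * (1 - theta_hat) / n_tosses)
    return theta_hat, (theta_hat - 1.96 * stderr, theta_hat + 1.96 * stderr)

for n in (10, 100, 10000):
    est, ci = estimate_bias(n)
    print(f"N={n:6d}  theta_hat={est:.3f}  ~95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```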


Can Optimally-Fair Coin Tossing Be Based on One-Way Functions?

Coin tossing is a basic cryptographic task that allows two distrustful parties to obtain an unbiased random bit in a way that neither party can bias the output by deviating from the protocol or halting the execution. Cleve [STOC’86] showed that in any r round coin tossing protocol one of the parties can bias the output by Ω(1/r) through a “fail-stop” attack; namely, they simply execute the prot...
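For intuition only, here is a toy commit-and-reveal (Blum-style) coin-flipping sketch. It is not a protocol from the paper above, and it deliberately exaggerates the issue: the party who learns the result first can "fail-stop" whenever she dislikes it, and whatever default the honest party then outputs becomes biased.

```python
# Toy Blum-style coin flip (illustrative assumption, not from the cited paper).
# Alice commits to a bit, Bob replies with his bit, Alice opens the commitment,
# and the output is the XOR.  Alice learns the result first, so she can abort.

import hashlib
import secrets

def commit(bit):
    """Return (commitment, opening) for a single bit."""
    opening = bytes([bit]) + secrets.token_bytes(16)
    return hashlib.sha256(opening).digest(), opening

def flip(alice_aborts_on=None):
    a = secrets.randbits(1)
    com, opening = commit(a)             # round 1: Alice -> Bob: commitment
    b = secrets.randbits(1)              # round 2: Bob -> Alice: his bit
    result = a ^ b                       # Alice sees the outcome before revealing
    if alice_aborts_on is not None and result == alice_aborts_on:
        return 1 - alice_aborts_on       # Bob's fallback bit after an abort; now biased
    assert hashlib.sha256(opening).digest() == com   # Bob verifies the opening
    return result

print(sum(flip() for _ in range(10_000)) / 10_000)                    # honest runs: ~0.5
print(sum(flip(alice_aborts_on=0) for _ in range(10_000)) / 10_000)   # fail-stop attack: ~1.0
```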


On the Black-Box Complexity of Optimally-Fair Coin Tossing

A fair two-party coin tossing protocol is one in which both parties output the same bit that is almost uniformly distributed (i.e., it equals 0 and 1 with probability that is at most negligibly far from one half). It is well known that it is impossible to achieve fair coin tossing even in the presence of fail-stop adversaries (Cleve, FOCS 1986). In fact, Cleve showed that for every coin tossing...


Giving Your Knowledge Half a Chance

1000 fair causally isolated coins will be independently flipped tomorrow morning and you know this fact. I argue that the probability, conditional on your knowledge, that any coin will land tails is almost 1 if that coin in fact lands tails, and almost 0 if it in fact lands heads. I also show that the coin flips are not probabilistically independent given your knowledge. These results are uncom...


Bayesian theory

In classification, Bayes' rule is used to calculate the probabilities of the classes. The main aim is to show how rational decisions can be made so as to minimize expected risk. Bayes' theorem provides a way to calculate the probability of a hypothesis based on its prior probability, the probabilities of observing various data given the hypothesis, and the observed data itself. Probability and ...
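As a tiny numerical sketch of the theorem as stated above, Bayes' rule turns a prior and two likelihoods into a posterior probability for a hypothesis. The two hypotheses, the θ = 0.7 alternative, and the equal priors are illustrative assumptions, not anything taken from the cited text.

```python
# Sketch: posterior probability of "fair coin (theta = 0.5)" vs. an assumed
# alternative "biased coin (theta = 0.7)" under equal priors.

def posterior_fair(n_heads: int, n_flips: int, theta_biased: float = 0.7,
                   prior_fair: float = 0.5) -> float:
    """P(fair | data) via Bayes' rule with two point hypotheses."""
    lik_fair = 0.5 ** n_flips
    lik_biased = theta_biased ** n_heads * (1 - theta_biased) ** (n_flips - n_heads)
    numerator = lik_fair * prior_fair
    return numerator / (numerator + lik_biased * (1 - prior_fair))

print(posterior_fair(n_heads=70, n_flips=100))   # ~0.0003: data favor the biased hypothesis
print(posterior_fair(n_heads=50, n_flips=100))   # ~0.9998: data favor the fair coin
```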



Publication date: 2005